-
Free, publicly-accessible full text available October 1, 2026
-
Abstract In this article, we exploit the similarities between Tikhonov regularization and Bayesian hierarchical models to propose a regularization scheme that acts like a distributed Tikhonov regularization, where the amount of regularization varies from component to component, together with a computationally efficient numerical scheme suitable for large-scale problems. In the standard formulation, Tikhonov regularization compensates for the inherent ill-conditioning of linear inverse problems by augmenting the data fidelity term, which measures the mismatch between the data and the model output, with a scaled penalty functional. The selection of the scaling of the penalty functional is the core problem in Tikhonov regularization. If an estimate of the amount of noise in the data is available, a popular approach is the Morozov discrepancy principle, which states that the scaling parameter should be chosen so that the norm of the data fitting error is approximately equal to the norm of the noise in the data. Too small a value of the regularization parameter yields a solution that fits the noise (too weak regularization), while too large a value leads to excessive penalization of the solution (too strong regularization). In many applications it is preferable to apply distributed regularization, replacing the regularization scalar by a vector-valued parameter, so as to allow different regularization for different components of the unknown, or for groups of them. Distributed Tikhonov-inspired regularization is particularly well suited when the data have significantly different sensitivity to different components, or to promote sparsity of the solution. The numerical scheme that we propose, while exploiting the Bayesian interpretation of the inverse problem and identifying Tikhonov regularization with maximum a posteriori estimation, requires no statistical tools.
A clever combination of numerical linear algebra and numerical optimization tools makes the scheme computationally efficient and suitable for problems where the matrix is not explicitly available. Moreover, in the case of underdetermined problems, passing through the adjoint formulation in data space may lead to a substantial reduction in computational complexity.
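The componentwise penalty described in this abstract can be illustrated with a minimal sketch (not the authors' algorithm; the matrix, data, and weight values below are invented for illustration). It solves min_x ||Ax - b||^2 + sum_i lambda_i x_i^2 through the normal equations with a diagonal penalty, and compares a uniform scalar weight with a distributed, componentwise one:

```python
import numpy as np

def distributed_tikhonov(A, b, lam):
    """Solve min_x ||A x - b||^2 + sum_i lam[i] * x[i]^2
    via the normal equations with a diagonal penalty matrix."""
    lhs = A.T @ A + np.diag(lam)
    rhs = A.T @ b
    return np.linalg.solve(lhs, rhs)

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = 1.0                      # sparse ground truth for the toy problem
b = A @ x_true + 0.01 * rng.standard_normal(50)

# Classical Tikhonov: one scalar weight for every component.
x_uniform = distributed_tikhonov(A, b, np.full(20, 1.0))
# Distributed version: weak penalty on the (assumed known) support,
# strong penalty elsewhere, mimicking componentwise regularization.
x_dist = distributed_tikhonov(A, b, np.where(np.arange(20) < 3, 0.01, 10.0))
```

In the sketch the favorable weights are chosen by hand; the point of the paper's hierarchical approach is precisely to determine such componentwise weights automatically.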
-
Abstract We consider inverse problems estimating distributed parameters from indirect noisy observations through discretization of continuum models described by partial differential or integral equations. It is well understood that errors arising from the discretization can be detrimental for ill-posed inverse problems, as discretization error behaves as correlated noise. While this problem can be avoided with a discretization fine enough to decrease the modeling error level below that of the exogenous noise, which is addressed, e.g., by regularization, the computational resources needed to deal with the additional degrees of freedom may increase so much as to require high performance computing environments. Following an earlier idea, we advocate the notion of the discretization as one of the unknowns of the inverse problem, which is updated iteratively together with the solution. In this approach, the discretization, defined in terms of an underlying metric, is refined selectively only where the representation power of the current mesh is insufficient. In this paper we allow the metrics and meshes to be anisotropic, and we show that this leads to a significant reduction of memory allocation and computing time.
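The idea of refining the mesh only where its representation power is insufficient can be sketched in one dimension (a toy isotropic version; the paper's metric-based anisotropic refinement is considerably more sophisticated, and the function and tolerance below are invented for illustration):

```python
import numpy as np

def refine_mesh(nodes, f, tol):
    """One sweep of selective refinement: bisect every interval whose
    linear interpolant misses f at the midpoint by more than tol."""
    new_nodes = [nodes[0]]
    for a, b in zip(nodes[:-1], nodes[1:]):
        mid = 0.5 * (a + b)
        err = abs(f(mid) - 0.5 * (f(a) + f(b)))  # midpoint interpolation error
        if err > tol:
            new_nodes.append(mid)                # refine only where needed
        new_nodes.append(b)
    return np.array(new_nodes)

f = lambda x: np.exp(-100.0 * (x - 0.5) ** 2)    # sharply peaked feature at 0.5
nodes = np.linspace(0.0, 1.0, 5)                 # coarse initial mesh
for _ in range(4):                               # iterate mesh update sweeps
    nodes = refine_mesh(nodes, f, 1e-3)
```

After a few sweeps the nodes cluster around the peak while the smooth regions keep the coarse spacing, which is the memory-saving effect the abstract refers to.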
-
Abstract Bayesian particle filters (PFs) are a viable alternative to sampling methods such as Markov chain Monte Carlo methods to estimate model parameters and related uncertainties when the forward model is a dynamical system, and the data are time series that depend on the state vector. PF techniques are particularly attractive when the dimensionality of the state space is large and the numerical solution of the dynamical system over the time interval corresponding to the data is time consuming. Moreover, information contained in the PF solution can be used to infer the sensitivity of the unknown parameters to different temporal segments of the data. This, in turn, can guide the design of more efficient and effective data collection procedures. In this article the PF method is applied to the problem of estimating cell membrane permeability to gases from pH measurements on or near the cell membrane. The forward model in this case comprises a spatially distributed system of coupled reaction–diffusion differential equations. The high dimensionality of the state space and the need to account for the micro-environment created by the pH electrode measurement device are additional challenges that are addressed by the solution method.
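The propagate–weight–resample cycle of a bootstrap particle filter can be sketched generically (a toy scalar linear-Gaussian system stands in for the reaction–diffusion forward model; all parameter values are illustrative, not from the paper):

```python
import numpy as np

def bootstrap_pf(y, n_particles, step, likelihood, init, rng):
    """Generic bootstrap particle filter: propagate particles through the
    dynamics, weight them by the data likelihood, then resample."""
    particles = init(n_particles, rng)
    means = []
    for yk in y:
        particles = step(particles, rng)                 # propagate
        w = likelihood(yk, particles)                    # weight by data fit
        w = w / w.sum()
        idx = rng.choice(n_particles, n_particles, p=w)  # multinomial resampling
        particles = particles[idx]
        means.append(particles.mean())                   # posterior mean estimate
    return np.array(means)

rng = np.random.default_rng(1)
# Toy system: x_{k+1} = 0.9 x_k + process noise, y_k = x_k + observation noise.
step = lambda x, rng: 0.9 * x + 0.1 * rng.standard_normal(x.shape)
lik = lambda yk, x: np.exp(-0.5 * ((yk - x) / 0.2) ** 2)
init = lambda n, rng: rng.standard_normal(n)

x, xs_true, ys = 2.0, [], []
for _ in range(30):                                      # simulate data
    x = 0.9 * x + 0.1 * rng.standard_normal()
    xs_true.append(x)
    ys.append(x + 0.2 * rng.standard_normal())
est = bootstrap_pf(np.array(ys), 500, step, lik, init, rng)
```

In the application described above, `step` would be an expensive PDE solve, which is why the per-particle cost dominates and the filter's sample-based structure matters.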
-
Abstract Dictionary learning, aiming at representing a signal in terms of the atoms of a dictionary, has gained popularity in a wide range of applications, including, but not limited to, image denoising, face recognition, remote sensing, medical imaging, and feature extraction. Dictionary learning can be seen as a possible data-driven alternative to solve inverse problems by identifying the data with possible outputs that are either generated numerically using a forward model or obtained as the results of earlier observations of controlled experiments. Sparse dictionary learning is particularly interesting when the underlying signal is known to be representable in terms of a few vectors in a given basis. In this paper, we propose to use hierarchical Bayesian models for sparse dictionary learning that can capture features of the underlying signals, e.g. sparse representation and nonnegativity. The same framework can be employed to reduce the dimensionality of an annotated dictionary through feature extraction, thus reducing the computational complexity of the learning task. Computed examples where our algorithms are applied to hyperspectral imaging and classification of electrocardiogram data are also presented.
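Sparse representation over a dictionary, as discussed above, can be illustrated with a classical greedy solver, orthogonal matching pursuit (a standard baseline, not the hierarchical Bayesian scheme proposed in the paper; the dictionary and signal below are synthetic):

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily select k atoms of D whose
    span best represents y, refitting the coefficients at each step."""
    residual = y.copy()
    support = []
    for _ in range(k):
        corr = np.abs(D.T @ residual)
        corr[support] = 0.0                     # never pick an atom twice
        support.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef     # re-fit, update residual
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(2)
D = rng.standard_normal((40, 60))
D /= np.linalg.norm(D, axis=0)                  # unit-norm atoms
x_true = np.zeros(60)
x_true[[5, 17]] = [1.5, -2.0]                   # 2-sparse ground truth
y = D @ x_true
x_hat = omp(D, y, 2)
```

With a noiseless 2-sparse signal and an incoherent random dictionary, the greedy solver recovers the two active atoms exactly; the Bayesian machinery of the paper addresses the noisy, structured cases where such guarantees break down.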
-
Abstract The transport of gases across cell membranes plays a key role in many different cell functions, from cell respiration to pH control. Mathematical models play a central role in understanding the factors affecting gas transport through membranes, and are the tool needed for testing the novel hypothesis that gases preferentially cross through specific gas channels. Since the surface pH of the cell membrane is regulated by the transport of gases such as CO2 and NH3, the membrane properties can be inferred indirectly from pH measurements. Numerical simulations based on recent models of the surface pH support the hypothesis that the presence of a measurement device, a liquid-membrane pH-sensitive electrode on the cell surface, may locally disturb the pH, leading to a systematic bias in the measured values. To take this phenomenon into account, it is necessary to equip the model with a description of the micro-environment created by the pH electrode. In this work we propose a novel, computationally lightweight numerical algorithm to simulate the surface pH data. The effect of different parameters of the model on the output is investigated through a series of numerical experiments with a physical interpretation.
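The reaction–diffusion character of such surface models can be conveyed with a toy one-dimensional explicit scheme (a stand-in sketch, not the authors' algorithm; the geometry, coefficients, and zero-flux boundaries below are invented for illustration):

```python
import numpy as np

def diffuse_react(c, D, dx, dt, k, steps):
    """Explicit finite-difference scheme for the 1D reaction-diffusion
    equation dc/dt = D * c_xx - k * c with zero-flux (Neumann) boundaries.
    Stability requires D * dt / dx**2 <= 1/2."""
    for _ in range(steps):
        lap = np.empty_like(c)
        lap[1:-1] = (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx**2
        lap[0] = (c[1] - c[0]) / dx**2          # zero-flux left boundary
        lap[-1] = (c[-2] - c[-1]) / dx**2       # zero-flux right boundary
        c = c + dt * (D * lap - k * c)          # diffusion plus consumption
    return c

n, dx = 100, 0.01
c0 = np.zeros(n)
c0[:10] = 1.0                                   # gas concentrated near the membrane
c = diffuse_react(c0, D=1e-3, dx=dx, dt=0.01, k=0.1, steps=500)
```

The full model in the paper couples several such species (with pH-dependent reaction terms and the electrode micro-environment); the sketch only shows the basic step a lightweight explicit solver iterates.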
